Why learning law really is a complex business
Julian Webb, University of Westminster
Funded by UKCLE, the Phineas Gage Group is investigating current brain and behaviour research and its application to learning and teaching law. This paper introduced the concept of complexity theory and showed how it offers a significantly different conception of learning from much established learning theory. (Note: Julian Webb is now UKCLE Director.)
bq. “In discussions of complexity, the human brain has always held a special place, not only because of its own structural complexity, but also because of its capability to deal with complexity.”
(Paul Cilliers, Complexity and postmodernism, Routledge, 1998, p16)
Complexity theory has its earliest origins in studies of self organisation within genetic and other biological systems, but the notion of complexity has become a powerful metaphor and conceptual device for discussing the emergence and development of systems in both the pure and the social sciences. ‘Connectionist’ and ‘complexity theory’ approaches to studying the brain and artificial neural networks raise important questions for our understanding of how the education system works and – perhaps more importantly – offer some tantalising insights into why it often works less well than we hope.
In this session Julian introduced the concept of complexity theory and showed how it offers a significantly different conception of learning from much established learning theory. Specifically, he suggested that in order to understand legal education as a complex system we must engage in a paradigm shift, characterised in terms of a shift in our understanding of learning from:
- a linear to a non-linear, recursive process
- convergent to divergent
- atomistic to relational
- uni-dimensional to multi-dimensional
- intentional to ‘messy’, random and unpredictable
h3. Introduction
Complexity theory is a new way of looking at systems. It has emerged over the last 20 years or so (see Kauffman, 1990, 1992) from an almost primordial trans-disciplinary soup of studies of self organisation within genetic and other biological systems, and in parallel developments in the natural and (latterly) social sciences. These studies have encompassed fields as apparently diverse as cybernetics and artificial intelligence, quantum physics, the neurosciences, organisational management and economic and social theory. Even in law, a theory of legal autopoiesis has developed from the work, chiefly, of two German scholars, the sociologist Niklas Luhmann and the jurist Gunther Teubner. Since its emergence in the 1980s this has become an increasingly influential, but still primarily Euro-centric, branch of legal theory, which draws heavily on concepts developed first in the study of living systems.
The concepts and methods of complexity theory are also starting to be used by educationalists (M Bar-Yam et al: nd). The education system itself can be seen as a complex system, with intricate interdependencies and many diverse factors affecting the interaction of its parts, in ways that are often difficult to predict, and educational theory is starting to use the language of complexity theory as a tool both for understanding how the education system works and for strategising change within the system.
In this paper I want to do three things: to introduce some basic ideas and concepts of complexity theory; to look at how we can apply complexity thinking to (legal) education; and to consider the features of legal education that complexity theory suggests, perhaps, are not working as well as they might.
h3. Connectionism and the concept of complexity
The idea of a simple definition of complexity teeters on the brink of the oxymoronic, but most complexity theorists seem to agree that there are a number of relatively simple concepts fundamental to our understanding of complex systems. The particular formulation of complexity theory I intend to use today draws heavily, though not exclusively, on work on neural networks and the so-called ‘connectionist’ principles derived from network theory. This isn’t, as I have said, the only source of complexity theory, but it is a branch which has obvious and strong links to issues of learning and cognition; it has been an important part of my own way in to complexity theory, and so I will use it as my primary exemplar today. Part of the attraction for me is that some versions of connectionism also have interesting linkages with the kind of constructivist and phenomenological thinking that has influenced my own work developing more ‘holistic’ learning theories for law.
h3. What is ‘connectionism’?
‘Connectionism’, ‘neural networks’ and ‘parallel distributed processing’ (PDP), are all names for a method of computation that attempts to model the neural processes of the human brain.
Connectionism claims to be able to approximate the kind of spontaneous, creative and somewhat unpredictable behaviour of human agents in a way that conventional methods used by AI researchers, relying on classical ‘representational’ theory, cannot (Davis, 1992; Churchland, 1995).
The classical model treats all cognitive processes as the result of an enormous number of syntactically driven operations – ie in simple terms, it treats ‘intelligent behaviour’ as a species of rule following. Connectionist models rely on the neurally inspired approach of PDP. A PDP network involves a collection of simple processing units (we can think of them as neurons) which are linked through a series of levels. The connection lines are critical, since it is they, not the neurons, which incorporate modifiable values (called weights) which determine the strength of the connection between neurons – this models the synaptic connections in the brain. The system functions by each neuron continuously calculating its output in parallel with all the others, with patterns of activity developing depending on the modulating effect of the weights. Over time these patterns gradually relax into a stable pattern of activation in response to the inputs received. The values of the weights are determined by a learning rule. In many experimental models, the rule is one called back-propagation – the system is put through a training phase in which it is presented with a set of inputs and a set of outputs, and the weights are adjusted through the intermediate levels of neurons.
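The mechanics of a single processing unit can be made concrete with a short Python sketch (a hypothetical toy, not any particular PDP implementation): each unit computes a weighted sum of its inputs and squashes the result through an activation function, and it is the weights on the connections, not the units themselves, that carry the ‘knowledge’.

```python
import math

def unit_output(inputs, weights, bias=0.0):
    """One processing unit: a weighted sum of inputs squashed by a sigmoid.
    The modifiable values are the weights on the connections."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Three input neurons feeding two hidden units, each computing in parallel
inputs = [1.0, 0.0, 1.0]
hidden_weights = [[0.5, -0.3, 0.8],   # weights into hidden unit 1
                  [-0.6, 0.9, 0.1]]   # weights into hidden unit 2
hidden = [unit_output(inputs, w) for w in hidden_weights]
print(hidden)
```

Changing any single weight changes the pattern of activation across the whole layer, which is why the system’s behaviour is a property of the connections rather than of any one unit.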
Through multiple iterations the system learns to generate the patterns which enable it to match inputs to outputs. To give an example, in one of the early models the inputs represented the present tenses of English regular and irregular verbs, the outputs the past tenses (Rumelhart & McClelland, 1986:216-71). Over the iterations the system settled on a collection of weights that captured most of the characteristics of both regular and irregular verbs, until it was able to respond appropriately to verbs it had not encountered before. Interestingly, its learning process closely mirrored the features of language learning exhibited by children.
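The training phase described above can be illustrated with a minimal back-propagation sketch (using NumPy, and logical OR as a stand-in for the verb-tense data, which is far too large to reproduce here): the learning rule repeatedly nudges the weights through the intermediate layer until the network’s outputs settle on the training targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: inputs and target outputs (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Two layers of modifiable connection weights
W1 = rng.normal(size=(2, 3))   # input -> hidden
W2 = rng.normal(size=(3, 1))   # hidden -> output

for _ in range(10000):
    # Forward pass: every unit computes its activation in parallel
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the error back and adjust the weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out).flatten())   # the learned input-output mapping
```

Note that nowhere in the trained network is there an explicit rule ‘output 1 if either input is 1’; that rule is only a post hoc description of the relationship the weights have settled into, a point I return to below.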
It has to be said that connectionist claims are still subject to challenge, particularly as to the ability of connectionism to explain higher cognitive functions. As Dennett (1991:268) has observed, PDP undoubtedly moves cognitive modelling closer to neural modelling, but there is still a substantial _terra incognita_ between the mind sciences and the brain sciences. However, there is now also a growing body of ‘fusionist’ scholarship that is attempting to narrow the gap, in the process suggesting that network models are indeed a stronger basis for explaining cognition than the classical representational models alone (Marcus, 2003; O’Brien & Opie, 1997, 1999). Whoever is right, it is a debate that ultimately raises important issues for our understanding of human development. Its way of accommodating both systemic genetic and environmental features potentially reconciles many issues in the old nature-nurture debate. Various aspects of cognition and information processing, including language acquisition, concepts of conscious and implicit learning, processes of pattern recognition and our understanding of imagination and creativity, can all usefully be discussed within the framework of connectionist models (Y Bar-Yam, 1997, ch 3). The discussion is thus one that educationalists need to come to terms with. But this is not my primary focus for today. What I am more interested in, as I have said, are the _systemic_ features of neural networks, because it is at the systemic level that connectionism tells us some useful things about complexity more generally (see, further, Bar-Yam, 1997; Cilliers, 1998). Indeed, it is tempting to see the developed neural network as a paradigm complex system.
We can illustrate this by identifying those features of PDPs which appear increasingly to be treated by complexity theorists as generic features of complex systems:
- ‘Memory’ or ‘knowledge’ does not reside in any single neuron, but only in the relationships between neurons – it is, in the jargon, distributed.
- The network uses many essentially simple components which are richly interconnected and thus able to undertake quite complex activities (ie it is their interconnectedness or _relationality_ that enables them to deal with complexity). (But this feature also limits both the comprehensibility of the system to any individual agent and the ability to predict the influence that any individual agent has – cf the classical order-at-the-edge-of-chaos arguments – Kauffman, 1990.)
- These interactions take the form of complex patterns that are generated by the system itself – the system is, to a degree, self organising, and its patterns are emergent properties of the interactions. This idea of emergence is of singular importance to complexity theory. Emergent properties are different from what we conventionally think of as properties: they are dynamic, often more than the sum of the parts (think about ‘love’ as an emergent property!); they cannot be analysed by conventional means (though some of their manifestations can be), they do not readily yield to conventional causal explanation (try explaining why an attraction does or does not lead to love), and they are often fundamentally unpredictable.
- The relationality of complex systems also raises one other critical point for learning theory: the PDP research shows that learning in such systems is not rule-based in any explicit sense. The learning rule is merely a description of a relationship between inputs and outputs; it is not prescriptive in the representational sense. The model of the mind (and of language) can be approximately described by rules, but that is not the same thing: these rules are post hoc descriptions rather than ‘true’ representations of how the mind works. The mind, this suggests, works in ways that are relational rather than representational – a notion which, if taken seriously, could have significant implications for our understanding of things like the learning of associations.
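The first of these features – memory distributed across connections, with the system relaxing into a stable pattern of activation – can be illustrated with a minimal Hopfield-style network (a standard textbook construction, not drawn from the paper itself): a pattern is stored in the weight matrix, and a corrupted input relaxes back to the stored attractor.

```python
import numpy as np

# Store one pattern in a Hopfield-style network: the memory lives in the
# weight matrix (the connections), not in any single unit.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)   # no unit connects to itself

# Start from a corrupted version of the pattern (two units flipped)
state = pattern.copy()
state[0] *= -1
state[3] *= -1

# Repeated parallel updates: the network relaxes to a stable attractor
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(state.tolist())   # the recalled pattern
```

No single unit ‘contains’ the memory; delete any one connection and recall degrades gracefully rather than failing outright, which is the distributed character of the system in miniature.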
Now, let’s apply this thinking rather more directly to the system and process of legal education. Part of my thesis, of course, is that process cannot be considered independently of the system of which it is part – something which much of our discussion of pedagogy is rather prone to do. The system of legal education, in turn, does not exist in isolation: it exists within an environment over which it may have some influence but little control. By definition, a system only has control over elements that are part of the system. In this context, simply changing our pedagogy without getting some clarity about what our system actually involves is therefore unlikely to prove a strategy for success. I will begin to address this by focussing on what I see as a set of key myths that legal education, by failing to acknowledge its own complexity, continues to foster, to the possible detriment of the learning process. Most of these myths have been challenged at one point or another by legal educationalists or other writers on higher education more generally, so few of my points are by themselves original:
h3. Myth 1: Learning is uni-dimensional
Legal education as a system, like most of higher education, continues to behave as if learning equated to knowledge acquisition. Of course, it does not. Learning is multi-dimensional, both as a system and as a process. We are increasingly aware of the process dimensions – how the efficacy of even traditional learning depends critically on the cognitive, affective and conative capacities of the learner – but any learning system also needs to address the range of learning that goes to enhance individual capability and develop potential. These elements can be defined as:
- learning to learn – any learning system needs to equip its learners with the tools they need to be effective, and ultimately, independent learners
- learning to do – a learning system ought to be concerned with capability and performativity – this head thus encompasses both ‘know what’ and ‘know how’; perhaps also ‘know why’
- learning to be – ie cultural, aesthetic and moral education. It addresses motivations, values and desires – the process of becoming rather than doing or having.
In the present system these elements are widely dissociated – in the language shared by systems theory and work on cognitive architecture, they are ‘structurally de-coupled’ – subject, in the jargon, to buffering, connectivity decompositions and the closure of potential feedback loops.
h3. Myth 2: Learning is linear
The systemic part of this problem we are familiar with; we treat the education system as a developmental whole whereas it is really full of developmental holes – a systemic as opposed to processual case of structural de-coupling. The Law Society’s recent Training framework review is the latest, and possibly one of the more promising, attempts to address some of the holes within the legal education system itself.
Processually, however, we also tend to assume that learning follows a very linear input-output model. Does it? Or is learning much more non-linear, recursive and distributed than we assume? Work on the PDP models themselves and on concepts like implicit learning suggests we need to take non-linear learning much more seriously than we have.
h3. Myth 3: We can just ‘teach’ propositional knowledge
Not if learning is inherently multi-dimensional we can’t – see Myth 1.
h3. Myth 4: All meaningful learning is planned
We operate in an environment which emphasises the importance of planned learning; notions like systematic instructional design, the outcomes orthodoxy and even the new technology of intentional learning theory (which reflects the view that it is intentionality that turns learning into an intended goal rather than an incidental outcome of an activity – Bereiter and Scardamalia, 1993:3) can be seen as part of this culture. I would not deny that planned learning is important, nor would I suggest that we cannot plan to build surprise (for our students) into our courses.
But are we sometimes guilty of trying to make learning too structured and predictable? Apart from the fact that it can make learning a very serious business, it values only those outcomes that we can see are relevant in the here and now – what about the things we don’t know that we don’t know? Yes, it is an improvement that we structure our teaching and learning more precisely, so that we have clear objectives, and outcomes that our students know, understand and, hopefully, value. But then we also bemoan the lack of originality and creativity that they show in response to our carefully mapped out learning process… maybe that’s what happens when you design out those messy, unpredictable moments in learning that take us all by surprise.
h3. Myth 5: Convergent learning works for everyone (more or less)
This rather follows on from my last point. Convergent teaching focuses on the delivery of specific subject matter. It tends to be highly structured and relatively teacher centred. It contrasts with divergent teaching and learning, which stresses more open ended, flexible and student centred approaches. We know our students come from increasingly diverse educational and cultural backgrounds and have different preferences in terms of learning styles, yet the tendency throughout the education system (not just in law) is still towards the convergent end of the scale. With the proliferation of new forms of knowledge, and with demands for creativity and autonomy amongst learners, we need to embrace the potential for diversity within a complex education system and do more to support both convergent and divergent styles. Indeed, complexity theory suggests both that a complex system can and should support a plurality of structures and processes, and that we should begin to view these styles not as mutually exclusive, but as functionally related and interdependent.
h3. Myth 6: The learner as atomistic individual reigns supreme
I probably don’t need to say much about this last point – we are increasingly aware of the importance of groups and collaborative learning, though they remain a challenge to our very individualistic assessment culture. Again, complexity theory emphasises the importance of what we might call relational forms of learning. This forms an important part of our ‘learning to be’, and work on living systems and the development of co-operation in human society emphasises the need to understand both the importance and the complexity of co-operation (for example Axelrod’s _The Evolution of Co-operation_ and _The Complexity of Co-operation_).
h3. Where next?
I am conscious that I have constructed a somewhat sweeping agenda, but I don’t apologise for that. Complexity theory warns us that complex systems are not amenable to simple fixes. So, my question to you: what might legal education look like if we took complexity seriously?
Julian is a Professor in the School of Law at Westminster. His research interests include the legal profession, legal ethics and legal education. Julian is a member of the project team examining Applying behavioural science to legal education.
Last Modified: 12 July 2010